Different video understanding tasks are typically treated in isolation, and even with distinct types of curated data (e.g., classifying sports in one dataset, tracking animals in another). However, in wearable cameras, the immersive egocentric perspective of a person engaging with the world around them presents an interconnected web of video understanding tasks -- hand-object manipulations, navigation in the space, or human-human interactions -- that unfold continuously, driven by the person's goals. We argue that this calls for a much more unified approach. We propose EgoTask Translation (EgoT2), which takes a collection of models optimized on separate tasks and learns to translate their outputs for improved performance on any or all of them at once. Unlike traditional transfer or multi-task learning, EgoT2's flipped design entails separate task-specific backbones and a task translator shared across all tasks, which captures synergies between even heterogeneous tasks and mitigates task competition. Demonstrating our model on a wide array of video tasks from Ego4D, we show its advantages over existing transfer paradigms and achieve top-ranked results on four of the Ego4D 2022 benchmark challenges.
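To make the "task translator" idea concrete, here is a minimal sketch, not the paper's exact architecture: frozen task-specific backbones emit token sequences, and a small shared transformer fuses them into a prediction for a target task. All module names, dimensions, and the pooling choice are illustrative assumptions.

```python
# Minimal sketch of a shared task translator over frozen task-specific backbones.
# Names, dimensions, and pooling are assumptions, not the EgoT2 specification.
import torch
import torch.nn as nn

class TaskTranslator(nn.Module):
    def __init__(self, num_tasks, feat_dim=256, num_classes=10, depth=2, heads=4):
        super().__init__()
        # one learnable embedding per source task so the translator knows
        # which backbone each token came from
        self.task_embed = nn.Embedding(num_tasks, feat_dim)
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, task_features):
        # task_features: list of (B, T_i, feat_dim) tensors, one per frozen backbone
        tokens = [f + self.task_embed.weight[i] for i, f in enumerate(task_features)]
        x = torch.cat(tokens, dim=1)      # (B, sum_i T_i, feat_dim)
        x = self.encoder(x)               # cross-task fusion
        return self.head(x.mean(dim=1))   # pooled prediction for the target task

# usage: outputs of two frozen task-specific backbones
feats = [torch.randn(2, 8, 256), torch.randn(2, 4, 256)]
print(TaskTranslator(num_tasks=2)(feats).shape)  # torch.Size([2, 10])
```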
Deep learning methods have recently been used with great success to address numerous challenges in digital pathology. However, many of these methods are fully supervised and require annotated images. Annotating histology images is a time-consuming and tedious process even for highly skilled pathologists, and as a result most histology datasets lack region-of-interest annotations and are only weakly labeled. In this paper we introduce HistoPerm, a view generation method designed to improve the performance of representation learning techniques on histology images in weakly supervised settings. HistoPerm permutes augmented views of patches generated from whole-slide histology images to improve classification accuracy. These permuted views share the same original slide-level label but are produced from different patch instances. We tested adding HistoPerm to BYOL and SimCLR on two public histology datasets for celiac disease and renal cell carcinoma. For both datasets, we found improved performance in terms of accuracy, F1 score, and AUC compared to the standard BYOL and SimCLR approaches. In particular, in the linear evaluation configuration, HistoPerm improves classification accuracy on the celiac disease dataset by 8% for BYOL and 3% for SimCLR. Similarly, with HistoPerm, classification accuracy increases by 2% for BYOL and 0.25% for SimCLR on the renal cell carcinoma dataset. The proposed permutation-based view generation approach can be adopted in common representation learning frameworks to capture histopathology features in weakly supervised settings, and can lead to whole-slide classification results that are close to, or even better than, those of fully supervised methods.
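As a rough illustration of the permutation-based view generation described above, the sketch below pairs each patch with another patch instance that shares the same slide-level label, so the two "views" of a positive pair come from different patches of the same weakly labeled class. The function name and data layout are assumptions; the resulting pairs would feed a BYOL or SimCLR objective exactly as standard two-view pairs do.

```python
# Sketch of permuted view pairing under the assumption that only slide-level
# labels are available; not the authors' implementation.
import random
from collections import defaultdict

def permuted_views(patch_indices, labels):
    """labels: slide-level class label per patch. Returns (view1, view2) index
    pairs where the second view is drawn, when possible, from a different patch
    instance that shares the same slide-level class."""
    by_label = defaultdict(list)
    for idx, y in zip(patch_indices, labels):
        by_label[y].append(idx)
    pairs = []
    for idx, y in zip(patch_indices, labels):
        candidates = [j for j in by_label[y] if j != idx] or [idx]
        pairs.append((idx, random.choice(candidates)))
    return pairs

# usage: four patches from two weakly labeled slides
print(permuted_views(list(range(4)), labels=[0, 0, 1, 1]))
```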
In this paper we consider the problem of classifying procedural activities from long videos spanning up to several minutes (e.g., cooking different recipes, performing different home improvements, creating various forms of arts and crafts). Accurately classifying these activities requires not only recognizing the individual steps that compose the task but also capturing their temporal dependencies. This problem is dramatically different from traditional action classification, where models are typically optimized on videos that span only a few seconds and that are manually trimmed to contain simple atomic actions. While step annotations could enable training models to recognize the individual steps of procedural activities, existing large-scale datasets in this area do not include such segment labels due to the prohibitive cost of manually annotating temporal boundaries in long videos. To address this issue, we propose to automatically identify steps in instructional videos by leveraging the distant supervision of a textual knowledge base (wikiHow) that contains detailed descriptions of the steps needed to carry out a wide variety of complex activities. Our method uses a language model to match automatically transcribed speech from the video to step descriptions in the knowledge base. We demonstrate that video models trained to recognize these automatically labeled steps (without manual supervision) yield a representation that achieves superior generalization performance on four downstream tasks: recognition of procedural activities, step classification, step forecasting, and egocentric video classification.
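A rough sketch of the distant-supervision labeling described above: embed automatically transcribed speech segments and wikiHow step descriptions with a sentence encoder, and assign each segment its most similar step as a noisy pseudo-label. The sentence-transformers dependency, the model name, and the similarity threshold are assumptions here, not the paper's actual matching model.

```python
# Distant supervision sketch under assumed tooling (sentence-transformers).
import torch
from sentence_transformers import SentenceTransformer  # assumed dependency

def label_segments_with_steps(asr_segments, step_descriptions, min_sim=0.4):
    encoder = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice
    seg_emb = torch.tensor(encoder.encode(asr_segments))
    step_emb = torch.tensor(encoder.encode(step_descriptions))
    seg_emb = torch.nn.functional.normalize(seg_emb, dim=-1)
    step_emb = torch.nn.functional.normalize(step_emb, dim=-1)
    sims = seg_emb @ step_emb.T                          # cosine similarities
    scores, idx = sims.max(dim=-1)
    # segments that match no step well enough receive no pseudo-label (-1)
    return [i.item() if s >= min_sim else -1 for s, i in zip(scores, idx)]
```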
Few-shot classification requires adapting knowledge learned from a large annotated base dataset to recognize novel, unseen classes, each represented by only a few labeled examples. In this scenario, pretraining a high-capacity network on the large dataset and then fine-tuning it on the few examples leads to severe overfitting. At the same time, training a simple linear classifier on top of "frozen" features learned from the large labeled dataset fails to adapt the model to the properties of the novel classes, effectively inducing underfitting. In this paper we propose an alternative to these two popular strategies. First, our method uses a linear classifier trained on the novel classes to pseudo-label the entire large base dataset. This effectively "hallucinates" the novel classes in the large dataset, even though these novel categories are not present in the base data (novel and base classes are disjoint). Then, in addition to the standard cross-entropy loss on the novel dataset, it fine-tunes the entire model with a distillation loss on the pseudo-labeled base examples. This step effectively trains the network to recognize contextual and appearance cues that are useful for novel-class recognition, but using the entire large-scale base dataset, thereby overcoming the inherent data-scarcity problem of few-shot learning. Despite the simplicity of this approach, we show that our method outperforms the state-of-the-art on four well-established few-shot classification benchmarks.
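The objective described above can be sketched as cross-entropy on the few labeled novel examples plus a distillation loss on base images pseudo-labeled with novel classes. In the sketch below, the teacher probabilities are assumed to come from the frozen linear classifier fit on the novel support set, and the temperature and weighting are illustrative, not the paper's settings.

```python
# Minimal sketch of the combined cross-entropy + distillation objective.
import torch
import torch.nn.functional as F

def fewshot_distill_loss(model, novel_x, novel_y, base_x, teacher_probs, T=4.0, alpha=0.5):
    """teacher_probs: temperature-softened novel-class predictions for base_x
    produced by the frozen linear classifier fit on the novel support set."""
    ce = F.cross_entropy(model(novel_x), novel_y)          # few labeled novel examples
    student_logp = F.log_softmax(model(base_x) / T, dim=-1)
    kd = F.kl_div(student_logp, teacher_probs, reduction="batchmean") * (T * T)
    return alpha * ce + (1 - alpha) * kd                   # weighting is illustrative
```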
We present a convolution-free approach to video classification built exclusively on self-attention over space and time. Our method, named "TimeSformer," adapts the standard Transformer architecture to video by enabling spatiotemporal feature learning directly from a sequence of frame-level patches. Our experimental study compares different self-attention schemes and suggests that "divided attention," where temporal attention and spatial attention are separately applied within each block, leads to the best video classification accuracy among the design choices considered. Despite the radically new design, TimeSformer achieves state-of-the-art results on several action recognition benchmarks, including the best reported accuracy on Kinetics-400 and Kinetics-600. Finally, compared to 3D convolutional networks, our model is faster to train, it can achieve dramatically higher test efficiency (at a small drop in accuracy), and it can also be applied to much longer video clips (over one minute long). Code and models are available at: https://github.com/facebookresearch/TimeSformer.
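A minimal sketch of the "divided attention" scheme, not the official TimeSformer implementation: within a block, self-attention is applied first across time (the same spatial patch over all frames) and then across space (all patches within a frame). The dimensions and the frame-major token ordering are assumptions.

```python
# Divided space-time attention block, assuming frame-major token ordering.
import torch
import torch.nn as nn

class DividedSpaceTimeBlock(nn.Module):
    def __init__(self, dim=192, heads=3):
        super().__init__()
        self.time_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.space_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x, T, S):
        # x: (B, T*S, dim) sequence of frame-level patch tokens
        B, N, D = x.shape
        # temporal attention: attend across the T frames at each spatial location
        xt = self.norm1(x).reshape(B, T, S, D).permute(0, 2, 1, 3).reshape(B * S, T, D)
        xt, _ = self.time_attn(xt, xt, xt)
        x = x + xt.reshape(B, S, T, D).permute(0, 2, 1, 3).reshape(B, N, D)
        # spatial attention: attend across the S patches within each frame
        xs = self.norm2(x).reshape(B * T, S, D)
        xs, _ = self.space_attn(xs, xs, xs)
        x = x + xs.reshape(B, N, D)
        return x + self.mlp(self.norm3(x))

# usage: 8 frames x 196 patches of 192-dim tokens
tokens = torch.randn(2, 8 * 196, 192)
out = DividedSpaceTimeBlock()(tokens, T=8, S=196)
```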
Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture -- Channel-Separated Convolutional Network (CSN) -- which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.
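A minimal sketch of a channel-separated 3D convolution in the spirit of CSN: a 1x1x1 convolution carries the channel interactions and a depthwise 3x3x3 convolution carries the spatiotemporal interactions. This is a single illustrative block, not the full architecture.

```python
# One channel-separated 3D conv block; layer names and normalization are assumptions.
import torch
import torch.nn as nn

class ChannelSeparated3DConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # pointwise conv: channel mixing only
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
        # depthwise conv: spatiotemporal filtering, one filter per channel
        self.depthwise = nn.Conv3d(out_ch, out_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=out_ch, bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.relu(self.bn(self.depthwise(self.pointwise(x))))

# usage
clip = torch.randn(1, 64, 8, 56, 56)
print(ChannelSeparated3DConv(64, 128)(clip).shape)  # torch.Size([1, 128, 8, 56, 56])
```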
There is a natural correlation between the visual and auditive elements of a video. In this work we leverage this connection to learn general and effective models for both audio and video analysis from self-supervised temporal synchronization. We demonstrate that a calibrated curriculum learning scheme, a careful choice of negative examples, and the use of a contrastive loss are critical ingredients to obtain powerful multi-sensory representations from models optimized to discern temporal synchronization of audio-video pairs. Without further finetuning, the resulting audio features achieve performance superior or comparable to the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50). At the same time, our visual subnet provides a very effective initialization to improve the accuracy of video-based action recognition models: compared to learning from scratch, our self-supervised pretraining yields a remarkable gain of +19.9% in action recognition accuracy on UCF101 and a boost of +17.7% on HMDB51.
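As an illustration of learning from audio-video synchronization, the sketch below uses a symmetric InfoNCE-style contrastive loss in which temporally aligned clip embeddings form positives and all other pairings act as negatives; the paper's actual loss formulation and its curriculum of negative examples differ, so treat this only as a stand-in.

```python
# InfoNCE-style stand-in for an audio-video synchronization objective.
import torch
import torch.nn.functional as F

def sync_contrastive_loss(video_emb, audio_emb, temperature=0.1):
    """video_emb, audio_emb: (B, D) embeddings where row i of each comes from the
    same, temporally synchronized clip; all other rows serve as negatives."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.T / temperature                  # (B, B) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)
    # symmetric objective: video-to-audio and audio-to-video matching
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```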
In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significant gains in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block "R(2+1)D" which produces CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101, and HMDB51.
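A minimal sketch of the (2+1)D factorization: a full 3x3x3 convolution is replaced by a 1x3x3 spatial convolution followed by a 3x1x1 temporal convolution with a nonlinearity in between. The intermediate channel width is left as a free parameter here rather than the paper's prescribed value.

```python
# Factorized (2+1)D convolution; intermediate width is an assumption.
import torch
import torch.nn as nn

class R2Plus1DConv(nn.Module):
    def __init__(self, in_ch, out_ch, mid_ch=None):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1), bias=False)
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(3, 1, 1),
                                  padding=(1, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(mid_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (B, C, T, H, W)
        return self.temporal(self.relu(self.bn(self.spatial(x))))

print(R2Plus1DConv(3, 64)(torch.randn(1, 3, 16, 112, 112)).shape)
# torch.Size([1, 64, 16, 112, 112])
```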
We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3 × 3 × 3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. In addition, the features are compact, achieving 52.8% accuracy on the UCF101 dataset with only 10 dimensions, while also being very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.
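A toy sketch, far smaller than the real C3D, of a homogeneous 3D ConvNet with 3x3x3 kernels throughout, used as a fixed feature extractor feeding a simple linear classifier as the abstract describes. Layer counts and widths are illustrative only.

```python
# Toy homogeneous 3x3x3 ConvNet; depths and widths are illustrative assumptions.
import torch
import torch.nn as nn

def c3d_like(channels=(3, 64, 128, 256)):
    layers = []
    for c_in, c_out in zip(channels[:-1], channels[1:]):
        layers += [nn.Conv3d(c_in, c_out, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True),
                   nn.MaxPool3d(kernel_size=2)]
    layers += [nn.AdaptiveAvgPool3d(1), nn.Flatten()]
    return nn.Sequential(*layers)

features = c3d_like()(torch.randn(2, 3, 16, 112, 112))   # (2, 256) clip features
classifier = nn.Linear(256, 101)                          # simple linear classifier
print(classifier(features).shape)                         # torch.Size([2, 101])
```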
We are witnessing a widespread adoption of artificial intelligence in healthcare. However, most of the advancements in deep learning (DL) in this area consider only unimodal data, neglecting other modalities whose multimodal interpretation is necessary for supporting diagnosis, prognosis and treatment decisions. In this work we present a deep architecture, explainable by design, which jointly learns modality reconstructions and sample classifications using tabular and imaging data. The explanation of the decision taken is computed by applying a latent shift that simulates a counterfactual prediction, revealing the features of each modality that contribute the most to the decision, together with a quantitative score indicating the modality importance. We validate our approach in the context of the COVID-19 pandemic using the AIforCOVID dataset, which contains multimodal data for the early identification of patients at risk of severe outcome. The results show that the proposed method provides meaningful explanations without degrading the classification performance.
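A schematic sketch of the latent-shift explanation described above, with the encoder, decoder, and classifier treated as assumed black-box callables: the latent code is shifted against the gradient of the predicted score, decoded into a counterfactual, and the reconstruction difference highlights the features driving the decision.

```python
# Latent-shift counterfactual sketch; interfaces and step size are assumptions.
import torch

def latent_shift_explanation(encoder, decoder, classifier, x, step=1.0):
    z = encoder(x).detach().requires_grad_(True)
    score = classifier(z).sum()                   # predicted risk score
    grad, = torch.autograd.grad(score, z)
    z_shifted = z - step * grad                   # move the latent against the decision
    counterfactual = decoder(z_shifted)
    # the reconstruction difference highlights the features driving the decision
    return (decoder(z) - counterfactual).abs()
```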